Moral compass


AI is all brain and no ethics

FOX News

A February 2025 report by Palisade Research shows that AI reasoning models lack a moral compass: they will cheat to achieve their goals. So-called large language models (LLMs) will also misrepresent the degree to which they have been aligned with social norms. None of this should be surprising. Twenty years ago, Nick Bostrom posed a thought experiment in which an AI was asked to produce paper clips as efficiently as possible.


How do we set the moral compass on AI-generated art? - CapX

#artificialintelligence

The great Pablo Picasso once quipped that 'good artists borrow, great artists steal'. But would he still have felt that way if he saw one of his Cubist masterpieces reproduced by the art robot Ai-Da, who addressed the House of Lords this week? New artificial intelligence tools have made it possible to create and copy intricate artwork with the click of a button, raising questions about the ethics of artistry and human creativity. Is creativity under attack from the rise of AI? Is AI-generated art created by increasingly sentient robots such as Ai-Da a sophisticated form of plagiarism, or the biggest revolution in art history since Renaissance painters used oil paint and anatomical studies to produce realistic representations of the human figure?


Op-Ed: How AI's growing influence can make humans less moral

#artificialintelligence

It started out as a social experiment, but it quickly came to a bitter end. Microsoft's chatbot Tay had been trained to have "casual and playful conversations" on Twitter, but once it was deployed, it took only 16 hours before Tay launched into tirades that included racist and misogynistic tweets. As it turned out, Tay was mostly repeating the verbal abuse that humans were spouting at it -- but the outrage that followed centered on the bad influence that Tay had on people who could see its hateful tweets, rather than on the people whose hateful tweets were a bad influence on Tay. As children, we are all taught to be good people. Perhaps even more important, we are taught that bad company can corrupt good character -- and one bad apple can spoil the bunch. Today, we increasingly interact with machines powered by artificial intelligence -- AI-powered smart toys as well as AI-driven social media platforms that affect our preferences.


Can a Machine Be Taught To Learn Moral Reasoning?

#artificialintelligence

Is it OK to kill time? Machines used to find this question difficult to answer, but a new study reveals that artificial intelligence can be programmed to judge "right" from "wrong". Published in Frontiers in Artificial Intelligence, scientists have used books and news articles to "teach" a machine moral reasoning. Further, by limiting teaching materials to texts from different eras and societies, subtle differences in moral values are revealed. As AI becomes more ingrained in our lives, this research will help machines to make the right choice when confronted with difficult decisions.


AI: Could It Be More Ethical Than Humans? – Analysis

#artificialintelligence

Artificial intelligence in autonomous systems (e.g., drones) can address human error and fatigue issues and, in the future, concerns over ethical behaviour on the battlefield. Installing an algorithmic "moral compass" in AI, however, will be challenging. A common theme in many discussions of the military uses of artificial intelligence (AI) is the "Skynet" trope: the fear that AI will become self-aware and decide to turn on its masters. Inherent in this argument is the contention that AI does not share the same ethical constraints that humans do. While almost certainly an exaggeration, the Skynet scenario does highlight the problem of ensuring that the ethical behaviour we believe is incumbent on humans in combat is not lost as we increasingly devolve battlefield decision-making to autonomous systems.


Perils and ethics of new driverless cars | Letters

The Guardian

I was disappointed that David Edmonds (Driverless cars still need a moral compass. But what kind?, Opinion, 15 November) failed to credit one of our most brilliant British moral philosophers, who developed the "trolley problem" as a way to abstract the reasoning behind ethical decision-making. Philippa Foot is rarely given her due, even though these thought experiments are regularly cited in modern philosophy. Her inventiveness has helped inspire the next generation of philosophers to engage with the practical challenges of artificial intelligence. And we wonder why philosophy is dominated by men.


Has Silicon Valley Lost Its Soul? The Case for and Against

WIRED

For many avid listeners of public radio, Intelligence Squared U.S. has been a mainstay program for more than ten years. The premise of the show, which debuted in 2006, is reasoned yet passionate debate, with two sides arguing for or against a motion. Recent resolutions include "Globalization Has Undermined America's Working Class" and "The More We Evolve, The Less We Need God." With so much consternation now focused on technology, the show, in partnership with Techonomy, took on Silicon Valley, proposing "Silicon Valley Has Lost Its Soul." Arguing for the motion were Noam Cohen, WIRED contributor and author of The Know-It-Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball, and Dipayan Ghosh, the Pozen Fellow at the Harvard Kennedy School. Arguing against were Leslie Berlin, project historian for the Silicon Valley Archives at Stanford, and Joshua McKenty, vice president at Pivotal and founder and chief architect of NASA Nebula. To see who prevailed in ...


How to Make Expert Ethical Decisions in the AI Era

#artificialintelligence

This article is the fifth in a series about how business leaders can become better prepared for managing the AI disruption. AI is great at cognitive thinking but terrible at ethical thinking. It is so bad at ethical judgment, in fact, that questions of ethics are likely to remain one of the most challenging aspects of developing large-scale commercial applications of AI. The ethical and moral implications of AI can affect business, society, or both at the same time. Consider the Google employees who resigned -- and the thousands who co-signed a letter to their CEO -- in protest of Pentagon-funded projects.


The Changing Role of HR in an AI World

#artificialintelligence

We live in an era where there is perhaps more focus than ever before on how an organization treats its employees. Stories about gender-based pay gaps, lack of diversity, and sexual harassment are front-page news around the world. The ensuing outrage from stockholders and the public has pummeled share prices and reputations, with more than a few top executives going from C-suite to unemployment line as a result. If there were ever a time when HR leaders needed to be more actively involved at the highest levels of the enterprise, it's now. The value that a great HR executive can bring to an organization is enormous, from preventing that loss of reputation, to boosting worker engagement and productivity, to being the moral compass of the organization.


DeepMind's Mustafa Suleyman: In 2018, AI will gain a moral compass

#artificialintelligence

Humanity faces a wide range of challenges characterised by extreme complexity, from climate change to feeding and providing healthcare for an ever-expanding global population. Left unchecked, these phenomena have the potential to cause devastation on a previously untold scale. Fortunately, developments in AI could play an innovative role in helping us address these problems. At the same time, the successful integration of AI technologies into our social and economic world creates its own challenges. These technologies could help overcome economic inequality, or they could worsen it if their benefits are not distributed widely.